
  1. How Meta Thinks About Personalization and Privacy

    Personalization, which tailors content based on user preference, has become widely used on virtually every social media platform. Because platforms surface content that appeals to each user’s unique interests, no two social media feeds are the same. Personalized social media posts can lead to a 50 percent increase in user engagement, as they resonate more deeply with individual preferences. This practice is so widely used that it has become a deep-seated expectation for users—74 percent of consumers feel frustrated when content isn’t personalized. But as social media platforms integrate personalization technology, questions around privacy, transparency, and user choice are becoming increasingly pronounced.

    Rob Sherman is Vice President and Deputy Chief Privacy Officer for Policy at Meta. He joined the company in 2012—at the time called Facebook—and has since worked to protect user privacy while promoting innovation. Before joining Meta, Rob was a lawyer at Covington & Burling LLP, specializing in technology and media.

    Below is a lightly edited and abridged transcript of our discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review, and tell your friends and colleagues to tune in.

    Shane Tews: Let’s walk through how personalization works on social media apps, and how it can provide users with a better experience.

    Rob Sherman: It’s exciting that, when I think about how my kids are growing up, they’re growing up with so much choice and so much ability to get the experience that works for them. That’s the same experience that I have now. So when I look at my Instagram feed, I am interested in travel, and so I get a lot of information about travel. I’m interested in things to do with my kids, and I get information about that. I’m a vegetarian, so I get stuff about vegetarian recipes. These are the kinds of things that I see and choose to consume on Instagram.

    The benefit of this is you can actually curate the experience that you want to have, and in general, your experience is going to be different than mine. You choose who you want to follow, and that gives a signal to our systems that this is the kind of content that Shane is interested in. You might click on certain things more than others. You might choose to comment on or like things. Those are all signals that help our system decide what to show you. And the goal is really to give you an enriching experience that gives you the content that is most important for you.

    Shane Tews: This personalization also informs ad choices. I think a lot of people don’t know that they can find out almost everything that Instagram or Facebook knows about them and, if they don’t like something, augment it or change it. Explain to our listeners how they can find the inventory of what information Meta properties have about them.

    Rob Sherman: Yes, when you see an ad on Instagram, there’s a way that you can find out why you saw that ad. Ideally, the ads that you’re seeing are really useful, valuable to you. They’re things that you would want to buy. I just actually found a Mother’s Day present for my wife, which she doesn’t know about yet, but will find out in a couple of days because we’re recording this right before Mother’s Day. So I found that in an Instagram ad. But ideally, you’re getting those because they’re things that you would actually want. But you can also click on the ad and ask, why am I seeing this? And you’ll get an explanation of the factors that we considered to think that you might want this. And then if you disagree, if we actually got it wrong, you can give us that feedback as well.

    So the idea is really, rather than all of us having to have that same content, we can each have the experience that we want, and then we can have a say in curating it in the way that is best for us. This is what we call ad preferences. This system is one of the factors that informs what ads you see. What it’s doing is using a combination of the things I’m interested in, the things I explicitly choose to follow, and the things that I engage with to decide what it’s going to surface to me. Part of that is just looking at whether people who like this particular page on Facebook are likely to be interested in these topics. Some of it is general, relating to broad populations. If you go into your ad settings, there’s a place where you can see a list of those topics. And then, like I said earlier, this will give you information about how those interests or other things you engaged with informed our choice to show you that particular ad at that moment. By default, it’s built to do the heavy lifting for you. The idea is to deliver personalization for everyone in a way that works for them, but then also give them the ability to dig in if they want to.

    Shane Tews: So that brings me to a question of how things work on the back end. As you have acquired different companies to become part of your suite of services, do my preferences follow me? There’s no reason for you to have separate technical teams for each platform, such as Facebook, Instagram, WhatsApp, and Oculus, because they share a lot of the same information. But have you encountered anything that users might be concerned about?

    Rob Sherman: One of the things I think is important is that when we’re building this technology, it’s increasingly working together. However, it’s also important to note that we have a feature called Account Center, which allows you to link your accounts together in the way you’re describing. If I would rather have my Instagram account be totally separate from my Facebook account, that’s absolutely something that I can do. I think, as a starting point, giving people the ability to decide how they want to use different products together is an important piece. Most of us have different ways that we engage with the different platforms. So one of the things that we think about is how we segregate that data and make sure that it’s used to deliver the services that you’re actually looking for, and that the information is being used in ways that you expect and want. We try to be really transparent about that and make sure people know and have choices about it. Actually, one of the primary focuses of our privacy engineering team is building back-end technologies to ensure that data is used in the intended manner and not misused in other ways.

    Shane Tews: Switching topics, because I know you just had LlamaCon, and AI is everyone’s favorite topic right now. What’s going on with Llama?

    Rob Sherman: I was just in California for LlamaCon, which is our first Llama developers’ conference. And it was really great to be together with developers from around the world who are using Llama to build incredible things. One of the things that we try to do is to both deploy technology and then provide support to the ecosystem to help this technology (in this case, open-source AI) be valuable and create value for people around the world. One of the grants that we gave out through our Llama Impact Program was for a developer that’s using Llama to help people get access to government services and understand how to navigate the various programs that are available to them that they might not know how to access. In India, there’s a developer that’s using Llama to deliver personalized language and literacy instruction. In a country where you might not have the scale for every kid to have a teacher who can give them a personalized experience, being able to do that on WhatsApp from your phone is actually really powerful.

    One of the things that I think is particularly important about the idea of open-sourcing is, if you think about a company like ours, we’re not education experts, to use the example that I just gave. These are the kinds of things that we would never have the expertise to build ourselves, but by deploying this technology and open-sourcing it, we can actually enable the ecosystem to build on top of it and do all these really neat things.

    Shane Tews: Future-casting here: anything that you know that I don’t know that we should talk about?

    Rob Sherman: I think the thing that I took away from LlamaCon more than anything else is how broad and diverse the uses of this technology are. There were lots of different developers there, but the small developers who were doing really unique, really interesting things stuck out to me. We had poster presentations where different developers could demonstrate what they were doing. And one of the developers that I ran into was building American Sign Language on Llama using WhatsApp. What it meant was you would have the ability to type something in and it would demonstrate how to say that in American Sign Language, but then you could also use your camera to record somebody signing to you and it would translate that into written text. This was a small developer that didn’t have a lot of resources but was bootstrapped by being able to build on top of Llama. It’s really incredible to think about what we’re able to do.

    When I look toward the future, one of the big opportunities with this new technology is that I think it’s going to give us a lot more choice. And I think that this technology is going to give each of us the ability not to have to rely on a developer to build technology exactly for our use case, but to be able to just tell the computer what we want. I also think there are benefits to having that technology integrated into your life, like you and me being able to use wearable technology to talk to each other, even if we’re not in the same place. I do think the big challenge, though, is that getting this right is going to require us to challenge the orthodoxy of our instinctive answers to a lot of these questions.

    Learn more: Agents, Access, and Advantage: Lessons from Meta’s LlamaCon | Rebuilding the Transatlantic Tech Alliance: Why Innovation, Not Regulation, Should Guide the Way | My AI Advisers: Lessons from a Year of Expert Digital Assistants | Why Meta’s Change in Fact-Checking Is Good for Democracy

    The post How Meta Thinks About Personalization and Privacy appeared first on American Enterprise Institute - AEI.

    Generative AI and Fabricated Judicial Opinions: A Slow Learning Curve for Some Attorneys

    On the final day of my civil procedure course, Professor Brian Landsberg offered a piece of advice. At first blush, it seemingly had nothing to do with the myriad federal rules and landmark cases like Pennoyer v. Neff that we’d studied. Yet it’s a pearl of wisdom I remember more than 35 years later: Never take square corners roundly.

    As anxious first-year law students, we were probably tempted to ask Professor Landsberg whether that maxim would appear on the exam. (Thankfully, no one did.) What the civil rights litigator who apparently got stuck teaching civ pro meant, of course, was don’t take shortcuts when it comes to the law and be sure to scrupulously follow the rules.

    I’m reminded of Professor Landsberg’s cogent counsel because some attorneys continue to take shortcuts in legal research by using generative artificial intelligence (Gen AI) tools when searching for cases to support their motions and, unfortunately, failing to verify whether they are real. By now, all practicing attorneys should know that Gen AI tools sometimes “hallucinate” (a kinder, gentler way of saying fabricate or make up) non-existent opinions. Not taking the time to confirm whether cases spat out by Gen AI tools are genuine is like a law firm partner failing to check the work of a first-year associate who just passed the bar exam.

    I first addressed the problem in June 2023, describing how a federal judge in Manhattan had sanctioned two attorneys for including Gen AI-produced fake judicial opinions in a case called Mata v. Avianca, Inc. As US District Judge P. Kevin Castel put it, “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” Castel added that citing “bogus opinions” not only “wastes time and money in exposing the deception,” but also “promotes cynicism about the legal profession and the American judicial system.”

    In January 2024, I discussed how Michael Cohen, the now-disbarred attorney who formerly worked for Donald Trump, used a Gen AI tool that produced three non-existent opinions that Cohen then passed along to his attorney who, in turn, incorporated them into a legal filing. Although neither Cohen nor his attorney was later sanctioned, US District Judge Jesse Furman in March 2024 called the incident “embarrassing” and wrote that “[g]iven the amount of press and attention that Google Bard and other generative artificial intelligence tools have received, it is surprising that Cohen believed it to be a ‘super-charged search engine’ rather than a ‘generative text service.’”

    In July 2024, the American Bar Association (ABA) issued a formal opinion regarding attorneys’ usage of Gen AI tools. It asserts that:

    Because [Gen AI] tools are subject to mistakes, lawyers’ uncritical reliance on content created by a [Gen AI] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a [Gen AI] tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation . . .

    The opinion goes on to stress that “[a]s a matter of competence . . . lawyers should review for accuracy all [Gen AI] outputs.”

    Unfortunately, news broke in February of yet another incident of attorneys stuffing motions with fake cases produced by Gen AI tools. This incident involved a major firm, Morgan & Morgan, that calls itself “America’s Largest Injury Law Firm” and says it tries “more cases than any other firm in the country.” According to an order to show cause filed on February 6 by US District Judge Kelly Rankin of Wyoming, a motion submitted by Morgan & Morgan and the Goody Law Group in Wadsworth v. Walmart, Inc. cited a whopping nine cases that simply don’t exist. Four days later, the plaintiffs’ attorneys jointly responded, acknowledging the cases “were not legitimate” and explaining that “[o]ur internal artificial intelligence platform ‘hallucinated’ the cases in question while assisting our attorney in drafting the motion.” They dubbed it “a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence.”

    Unfortunately, the cautionary tale had already occurred more than 18 months earlier in Mata v. Avianca, Inc., noted above. It generated coverage in The New York Times and an article in The National Law Review headlined “A ‘Brief’ Hallucination by Generative AI Can Land You in Hot Water.” The plaintiffs’ attorneys in Wadsworth, three of whom were individually sanctioned by Rankin with minimal fines (neither Morgan & Morgan nor the Goody Law Group was sanctioned), seemingly missed the news. Going forward, they should heed the ABA’s opinion and Professor Landsberg’s advice about not taking shortcuts.

    Learn more: Regulating Complex and Uncertain AI Technologies | What a Novel-Writing Organization’s Demise Teaches Us About AI | Free Speech Tradeoffs and Role-playing Chatbots: Sacrificing the Rights of Many to Safeguard a Few? | ​​Generative AI: The Emerging Coworker Transforming Teams

    No More Tappers: Get Skin in the Game

    Irony of ironies: Outrage around Jake Tapper and Alex Thompson’s book, Original Sin, is helping to sell more copies. The failure of a CNN anchor and an Axios reporter to cover President Joe Biden’s infirmities in a timely manner now delivers them free advertising for their book and a bigger haul. There is a partial fix for such warp: a media environment where people have skin in the game. But in the United States, the next era of news-gathering and dissemination might be against the law.

    The meta-revelation is as important as the revelation. President Biden was suffering declining acuity in office, and the media didn’t pick it up or report on it. The Tapper–Thompson book explores how that came to be. It is not lost on the commentariat that the authors are confessing their own failings and the failings of DC’s journalists generally. On the left, Jon Stewart issued a characteristic diatribe. Megyn Kelly went after Tapper from the right.

    We all have reason to lack confidence in our institutions these days. They need some fixing. Today’s more open and decentralized media environment allows for a greater menu of information options, up to and including literal “fake news.” We should not only bear the costs. There are things that could help us grow into an improved media environment and reap its benefits.

    The incentives for accurate—and intense—research and reporting are clearest in financial markets. Everyone should know the story of Hindenburg Research. The combined media and investment firm investigated public companies, seeking to hasten the fall of the badly run ones while selling their stocks short. Hindenburg’s 2021 blog post about Clover Health is an example of its craft and a dive into the firm’s philosophy. Early this year, its founder, Nathan Anderson, shuttered Hindenburg. I assume, but don’t know, that he simply has enough money now.

    What if reporters—or anyone—could make money by bringing true information forward? Particularly with something as momentous as a president’s mental health, there is a lot of value to our society from having better information.

    A system for rewarding people with good information already exists. The leading example is Polymarket, a platform where people can make bets on future events. A recent New York Magazine piece says:

    Polymarket is unlike other gambling forums in that it’s not a bookie and it doesn’t set the odds of its markets. Each market is simply structured as a question on which you can bet “yes” or “no.” Will Volodymyr Zelenskyy apologize to Trump? Will Fyre Festival 2 sell out? Will TikTok be banned again before May?

    Imagine the opportunity for reporters—and for White House staff, visitors to the Oval Office, and political supporters—if they felt they had better information than others about something so important. They could profit from bringing it forward by putting money on the line about future events. But here’s how the New York Magazine piece opens:

    At 6 a.m. on Wednesday, November 13, eight FBI agents in black windbreakers burst through the door of Shayne Coplan’s Soho apartment with a battering ram, surprising him and his girlfriend in bed. They seized his phone from the bedside table but wouldn’t let him touch it, not even to unlock it, lest he destroy evidence that might criminally implicate him or his company, Polymarket, the popular betting platform that over a week before had set off celebrations at Mar-a-Lago when it showed Donald Trump winning the presidential election well before the networks did.

    Polymarket is close enough to illegal in the United States that the company endeavors to exclude US users. Last August, eight Democratic members of Congress asked the Commodity Futures Trading Commission to make sure that political prediction markets were illegal. (Tongue in cheek: Did Senator Warren (D-MA) sign on because of dismay with the poor odds Polymarket users gave her crypto legislation a couple months earlier?)

    I do not think prediction markets are gambling. These are games of skill, not chance—and they are not really games at all, but a new information stew with heavy flavors of journalism and investment. There is a lot to learn about prediction markets and their permutations for good and evil. (The paragon of potential evil usage is the “assassination market.”) Tremendous value for society can be unlocked by lining up incentives to discover and publish true information. There is no better illustration of the need than the case of President Biden’s declining acuity, hidden in plain sight of what should be the world’s best and most dogged journalists.

    I send my best wishes to Joe Biden with respect to his recently reported health issues.

    Learn more: “Misinformation” Is Condescending: Do Better, Elites | DOGE, Open Up the MAX Database! | Haste Controls Waste! A Theory of Reform | Brilliant Ideas on the Cutting Room Floor

    Regulating Complex and Uncertain AI Technologies

    A common cognitive bias, in which decision-makers unconsciously substitute a complex problem with a simpler, related one, was first described in 2002 by Daniel Kahneman and Shane Frederick. The concept of attribute substitution explains that, when faced with a complex judgment (the target attribute), some may replace it with a more accessible, simpler judgment (the heuristic attribute) without realizing it—inadvertently responding only to the simpler problem. This process is automatic and often goes undetected by the individual, leading to systematic cognitive biases.

    Similarly, policymakers’ tendency to regulate new technologies based on their experiences with related legacy systems has become evident in recent history. New broadband technologies operating across multiple infrastructures have been regulated as if they were legacy telephony systems entrenched in infrastructure monopolies—the ill-fated unbundling and access regulation rules that would have worked well in telephony systems have, in broadband, significantly deterred investment in rival fiber and cable infrastructures in the European Union. The United States avoided this fate only because real competition from unregulated cable operators already existed. The subsequent decision not to bind information services with the telephony regulatory legacy became the solution.

    Going back a little further and considering regulations designed to keep people “safe” from the dangers of a new technology, we find the regulatory response to the emergence of a new general purpose technology—the locomotive (self-powered vehicle). The pioneering UK Locomotives Act 1865 required that a person, “while any Locomotive is in motion, shall precede such Locomotive on Foot by not less than Sixty Yards, and shall carry a Red Flag constantly displayed, and shall warn the Riders and Drivers of Horses of the Approach of such Locomotives, and shall signal the Driver thereof when it shall be necessary to stop, and shall assist Horses, and Carriages drawn by Horses, passing the same.”

    The effect was that the locomotive (and, subsequently, horseless carriages—or cars) could go no faster than a person could walk. It was repealed in 1896, some considerable time after steam- and internal combustion engine-powered vehicles were on the road. Vermont passed a similar red flag law in 1894, but it was repealed just two years later.

    Problems with these laws arose because horses and carriages had existing road use rights that regulators could not violate—including going fast. But the new locomotive technologies could be regulated, so rules were implemented that the regulators would have liked to impose on horse-drawn carriages—namely, limiting their speed to mitigate the costs of fast-travelling horses getting out of the control of the driver and harming pedestrians.

    However, the red flag laws failed on three counts. First, they prevented society from benefiting from the features of the new technologies—in this case, faster travel. Second, they were regulating an issue that was far less likely to occur with a locomotive than with a horse and carriage; unlike a horse, which has a mind of its own, the locomotive posed a much lower probability of escaping the driver’s control. Third, the regulations shaped the perceptions and behaviors of road users in regard to the new technology. Their understanding of the locomotive was developed under the tightly regulated oversight of the man with the flag. When the rules were repealed, road users had no idea that the locomotive could go fast, and they did not take evasive action quickly enough. Many more people were harmed because the laws disincentivized necessary learning.

    Likewise, a real risk exists in the rush to regulate new artificial intelligence (AI)—including generative pre-trained transformers (AI GPTs) such as ChatGPT and Llama. Decision-makers are substituting their understanding of constraining Good Old-Fashioned AI (GOFAI) big data tools—which respond well to risk management aligned with advancing engineering precision in computing—for the understanding necessary to govern applications in the face of the uncertainty and near-infinite variety invoked by the AI GPTs. The complex intertwining of these AI GPTs—where even application developers and the AIs themselves cannot explain how or why they come to certain outcomes—with equally complex human commercial and social systems is unprecedented.

    Regulating AI GPTs as if they were GOFAIs invokes all three risks of the red flag laws: New benefits from novel technologies will be lost, the potential harms are not necessarily the same, and people’s behavior will change in the presence of the new rules. The new world created with AI GPTs is certainly uncertain, so regulation should be guided by knowledge of its complexity and uncertainty—resisting the rush to regulate and, instead, learning through experience of the new rather than past fears of the old.

    Learn more: China’s AI Strategy: Adoption Over AGI | How Much Might AI Legislation Cost in the US? | The Best AI Law May Be One That Already Exists | AI’s Emerging Paradox

    The House Should Act Quickly to Repeal the Illegal, Expensive E-Rate Expansion

    Earlier this month, the Senate passed S.J.Res.7. The resolution, sponsored by Senator Ted Cruz, would repeal a Biden-era Federal Communications Commission (FCC) rule allowing E-Rate funds to subsidize Wi-Fi hotspot lending programs for off-campus use. This well-intentioned but misguided rule violates clear statutory limits on agency power and threatens an increasingly unstable Universal Service Fund (USF). The House should follow the Senate’s lead and revoke this initiative before the estimated June 4 deadline for congressional action.

    Established in 1996, the E-Rate program initially provided USF funding to bring broadband service to the nation’s schools and libraries—a buildout that was largely completed by 2006. Today it reimburses local governments for between 20 and 90 percent of the costs of serving these institutions, spending $2.68 billion in 2024 even though, as I’ve noted elsewhere, it is unclear whether this spending improves student learning outcomes. In 2024, the FCC expanded this program to subsidize Wi-Fi hotspots for off-premises use by students and library patrons. The agency’s Republican commissioners dissented—and their concerns about the rule’s legality and wisdom were echoed by Senator Cruz.

    Cruz’s resolution leverages the Congressional Review Act (CRA), a powerful yet rarely used legislative mechanism to check agency overreach. The CRA allows Congress to nullify major federal rules issued within a specific timeframe by a simple majority vote, bypassing the Senate filibuster. But success requires either presidential approval or a two-thirds vote of both the House and Senate to override a veto, which is rare given that agency leadership, appointed by the president, usually aligns with White House objectives. The exception is when the White House changes parties, creating a window within which Congress and the incoming president can strike regulations passed in the twilight of the previous administration. In both of his terms, President Donald Trump has made unparalleled use of the CRA process to nullify midnight rules passed by his predecessors.

    This seems a paradigmatic case for invoking the CRA, as the agency overstepped clear congressional boundaries. Section 254 established E-Rate as a facilities-based initiative: Its mission is to enhance broadband access specifically to “classrooms” and “libraries.” The FCC seeks to get around this limitation by asserting that “learning is no longer confined to the physical school or library building during regular operating hours” and that many students and patrons lack internet access at home. But E-Rate does not have a mandate to facilitate learning wherever it may be accomplished, and the argument that in the digital era everything is a classroom makes a mockery of congressional intent, especially in the post-Chevron era.

    Admittedly, the FCC is correct that much learning occurs at home, and some students are disadvantaged by a lack of home broadband connectivity. This was especially true during the COVID pandemic, when most schools and libraries were closed. Congress recognized this through the Emergency Connectivity Fund (ECF), which expressly included pandemic-era funding for schools and libraries to loan Wi-Fi hotspots for off-premises use. But Congress placed limits on this program: ECF was subject to a fixed budget from appropriations and was available only until the pandemic emergency ended in May 2023.

    The pandemic-era ECF program reinforces the notion that Congress did not intend the E-Rate program to generally fund hotspots for off-premises use. If E-Rate had pre-existing authority to fund hotspot lending programs, then Congress would not have needed to establish such a program as part of the ECF initiative. To the extent that there is any lingering ambiguity, the ECF program itself makes this distinction clear: While Congress funded its pandemic-era hotspot lending program through appropriations, it expressly stated that support for this initiative “shall be provided from amounts made available from the Emergency Connectivity Fund and not from contributions under Section 254(d).”

    This limitation highlights the fiscal dangers of the FCC’s hotspot lending rule. Unlike ECF funding, the Universal Service Fund is funded through a surcharge on consumer telecommunications bills. This funding mechanism is already unsustainable: Rising program costs and a declining revenue base have driven the surcharge from 3 percent in 1998 to 36.6 percent today. Expanding a $2.6 billion program to include home Wi-Fi hotspots nationwide would increase this further and place a greater burden on consumers—which is precisely what the ECF bill sought to avoid.

    Congress could simply wait for FCC Chairman Brendan Carr to repeal the rule. But CRA nullification is preferable, as it both repeals the existing rule and bars the agency from passing a substantially similar rule in the future. It would thus reinforce Congress’s intention that E-Rate is a facilities-based program. The House should act on Representative Russ Fulcher’s companion resolution before the estimated June 4 deadline for CRA action, repealing this agency overreach and clarifying the limits on the program.

    Learn more: The First Amendment and the Future of Net Neutrality | Loosening an Ownership Cap and Tightening a News Rule: Can Carr’s FCC Reconcile Its Objectives? | The Fragmented Privacy Landscape | My Response to the House Commerce Committee Privacy Working Group

  2. The Evidence So Far: What Research Reveals About AI’s Real Impact on Jobs and Society

    As organizations race to integrate new AI models into their workflows, everyone is wondering what the effects will be on industries, jobs, and society: Will these new technologies complement human capabilities, create new opportunities, or replace workers across various sectors? The following table compiles research on large language models (LLMs), chatbots, and AI systems published since ChatGPT’s late-2022 debut. It will be periodically refreshed with new studies that explore how these technologies are transforming industries, reshaping employment patterns, and impacting society. Learn more: China’s AI Strategy: Adoption Over AGI | The AI Race Accelerates: Key Insights from the 2025 AI Index Report | Lessons from China’s DeepSeek: A Wake-Up Call for AI Innovation | My AI Advisers: Lessons from a Year of Expert Digital Assistants

  3. First Amendment Fundamentals for Lawmakers as Courts Block Efforts to Protect Minors on Social Media

    Lawmakers considering bills to safeguard minors from ostensible harms linked to social media platforms should carefully review two recent federal court opinions declaring unconstitutional state laws imposing parental-consent, age-verification mandates. US District Judge Algenon Marbley’s April decision from Ohio in NetChoice v. Yost and US District Judge Timothy Brooks’ March ruling from Arkansas in NetChoice v. Griffin illustrate key First Amendment principles and policy concerns that almost invariably render such statutes unlawful. Rather than pursuing similar measures, lawmakers should embrace educational campaigns to promote the numerous parental tools and safety controls that already exist, many of which my colleague Shane Tews recently described. When these means are coupled with school-based digital literacy programs and what Tews calls “meaningful parental engagement in children’s digital lives,” minors can flourish online, enjoying multiple benefits of internet platforms while mitigating potential harms. For example, Judge Brooks noted: “iPhones and iPads empower parents to limit the amount of time their children can spend on the device, choose which applications . . . their children can use, set age-related content restrictions for those applications, filter online content, and control privacy settings.” Before addressing essential First Amendment tenets and policy concerns described in Yost and Griffin, here’s a summary of the permanently enjoined statutes. The Statutes. The Ohio statute requires operators of social media platforms that either target unemancipated minors under age 16 or are reasonably anticipated to be accessed by them to obtain verifiable consent from a “parent or legal guardian” before registering or signing up.
The law specifies five ways verifiable parental consent can be obtained, including “checking a form of government-issued identification against databases of such information.” A parent or guardian must also affirmatively consent to a platform’s “terms of service or other contract.” The Arkansas law provides that “[a] social media company shall not permit an Arkansas user who is a minor to be an account holder on the social media company’s social media platform unless the minor has the express consent of a parent or legal guardian.” It specifies three “reasonable age verification” methods by which a platform “shall verify the age of an account holder,” such as “a digital copy of a driver’s license” or “[g]overnment-issued identification.” Minors’ Rights. Lawmakers should understand that not only are parents’ rights to raise their children at stake, but also minors’ own First Amendment rights to engage in and receive lawful speech. As Judge Marbley wrote, children possess “‘a significant measure of’ freedom of speech and expression under the First Amendment,” and “access to information is essential to their growth into productive members of our democratic public sphere.” He called the statutory requirement that minors must first obtain parental consent “to contribute or access” online speech “an impermissible curtailment of their First Amendment rights.” Judge Brooks deemed it “undisputed” that minors use social media platforms “to engage in constitutionally protected speech.” Arkansas’s law affects minors’ First Amendment rights, he added, because it “forecloses access to social media for those minors whose parents do not consent to the minor’s use of social media.” Adults’ Rights. 
Judge Brooks agreed with NetChoice that Arkansas’s “age-verification requirement will deter adults from speaking or receiving protected speech on social media.” This chilling effect is caused by forcing adults to give up their anonymity to access speech and by the security risks of surrendering personally identifiable information valued by criminals. Strict Scrutiny. The Ohio and Arkansas statutes failed to clear this stringent level of judicial review. Judge Marbley observed in Yost that “laws that require parental consent for children to access constitutionally protected, non-obscene content are subject to strict scrutiny.” The US Supreme Court made this explicit 14 years ago in Brown v. Entertainment Merchants Association when it applied strict scrutiny to invalidate a California statute that required parental consent for minors to purchase or rent violent video games. I’ve addressed Brown before; it’s essential reading for understanding strict scrutiny’s requirements, as well as other principles affecting laws aimed at protecting minors, such as underinclusivity (laws that do far too little to resolve a problem) and overinclusivity (laws that go too far in restricting access to speech). Strict scrutiny mandates that the government prove both (1) a direct causal link––not just an association or correlation––between the speech being regulated and harm to minors, thereby demonstrating a compelling interest (an interest of the highest order) in restricting speech, and (2) that there are no alternative ways of remedying the supposed speech-caused harm(s) that would restrict less speech (a statute must be narrowly tailored). Alternative methods to safeguard minors on platforms that impose no government restrictions on speech include those described earlier by Shane Tews and Judge Brooks. In sum, promoting parental empowerment and using extant safety tools would avoid constitutional pitfalls and costly, protracted litigation.
Learn more: Congressional Crossfire: How Competing App Store Bills Create an Impossible Mandate | Where in the Supply Chain Should Minors’ Access to Internet Content Be Managed? | Trump’s Retributive Attacks on Speech and Press Rights Overshadow His Early Righteous Embrace of Online Free Expression | California Finally Abandons Facets of Flawed Social-Media Mandate

  4. Eight Pathways to Overcome Vetocracy and Unlock Abundance

    The recent release of Abundance by Ezra Klein and Derek Thompson has brought much-needed attention to the problem of sclerotic government, especially vetocracy. Vetocracy is an emergent property of institutions in which excessive veto points create systemic gridlock, manifested through:

- The time delay problem: excessive delays in routine processes;
- Permission accretion: overlapping jurisdictions that require multiple approvals; and
- Policy entrepreneurs: political actors who use the permitting system to slow down development.

While at times imperfect, here are eight practical solutions to overcome this challenge.

1. Refactor Laws. The best strategy to overcome vetocracy is to streamline outdated regulations. Just as software developers refactor code to improve functionality without changing external behavior, laws can be refactored. The Federal Aviation Administration’s (FAA) Streamlined Launch and Reentry Licensing rule exemplifies this approach: It created a single, flexible license for commercial space launches. Similarly, in 2019, Idaho allowed its entire set of administrative rules to expire and then selectively reauthorized necessary ones, which simplified 95 percent of the state’s regulations. Though politically difficult, refactoring regulations is the gold standard.

2. Categorical Exemptions. Another solution is to carve out categories of actions from detailed review on the premise that they have minimal impact. Categorical exclusions (CEs) spare routine projects from lengthy case-by-case approval. Under the National Environmental Policy Act (NEPA), federal agencies define classes of minor projects that do not require extensive review, thereby reducing the time to project approval. Recognizing this efficiency, policymakers have expanded CEs in recent years. The 2021 Bipartisan Infrastructure Law created new CEs for broadband and transportation projects, and in 2023, Congress even allowed agencies to adopt other agencies’ CEs to cover similar actions. Policymakers should look to expand CEs further.

3. Time Limits and Shot Clocks. Imposing time limits for regulatory decisions is a direct way to curtail vetocracy. Shot clocks force agencies to act within a fixed time period or else face consequences, such as default approval or legal action. In 2009, the Federal Communications Commission (FCC) set shot clocks on local governments for cell tower permits, limiting cities from stalling wireless buildout. Under these rules, upheld by the courts, a wireless company gains the right to sue when the deadline passes with no decision, treating the delay as a failure to act. The Prescription Drug User Fee Act (PDUFA) is a variant of this concept: It established review deadlines for drug applications in exchange for manufacturer fees, saving lives and generating returns of $7 to $11 billion. Implementing timeframes for regulatory decisions creates accountability.

4. Auctions. Instead of drawn-out hearings or favoritism in granting rights, auctions let the market determine winners quickly and transparently. The US pioneered this approach with radio spectrum. In 1993, Congress empowered the FCC to auction spectrum licenses. Since 1994, these auctions have generated over $233 billion for the Treasury while costing less than 1 percent to administer. Auctions help properly value assets and incentivize efficient processing.

5. Complex Contracts. Complex contractual arrangements, like transferable development rights (TDRs), can overcome vetoes by aligning stakeholder incentives. A classic example of TDRs is New York City’s air rights market. If a property owner is not using the full density allowed on their lot under NYC zoning, which is often the case with historic low-rise buildings, they can sell those development rights to other sites—typically to a developer of a nearby high-rise. Making rights tradeable can turn what would be a veto into a deal.

6. Regulatory Sandboxes. Regulatory sandboxes create a temporary, controlled exception to normal rules for innovative projects, allowing experimentation under agency oversight. First popularized abroad, the concept reached the US in 2018, when Arizona became the first state to launch a fintech sandbox, letting startups test new financial products—like digital banking or lending platforms—for up to two years with relaxed licensing requirements. While they have drawbacks, sandboxes are gaining traction in the US as a way to bypass cumbersome regulations that would otherwise block new technologies.

7. Self-Regulatory Organizations. Self-regulatory organizations (SROs) are private or industry-led bodies that create and enforce rules, standards, or certifications with limited government intervention. By delegating certain regulatory responsibilities to SROs, the government can reduce direct bureaucracy and leverage industry expertise. One of the largest SROs is the Financial Industry Regulatory Authority (FINRA), which regulates US broker-dealers and stock trading. While FINRA has shortcomings, properly designed SROs can be key to leveraging industry expertise while maintaining public oversight.

8. Privatization of Permitting Functions. Another pragmatic way to overcome bottlenecks is to delegate permitting and inspection tasks to third parties. Florida’s F.S. §553.791 allows property owners to use a “private provider” for building plans review and required inspections. These providers must be qualified engineers or architects and perform the same code-checking, but they operate on the applicant’s timeline. Governments need not be the sole permitting authority.

Learn more: Taking a Swing at the Size and Cost of Government | How the Vetocracy Paralyzes Progress | The Challenges of Age-Prediction: Where Current Technology Falls Short | Overregulation Threatens the Digital Economy

  5. Free Speech Tradeoffs and Role-playing Chatbots: Sacrificing the Rights of Many to Safeguard a Few?

    First Amendment law entails tradeoffs. Consider Free Speech Coalition v. Paxton, a case the US Supreme Court heard in January. It involves an online age-verification statute that ostensibly is designed to prevent minors from accessing sexually explicit content that Texas deems harmful to them but that is not obscene (and thus is constitutionally protected) when viewed by adults. The tradeoff in Free Speech Coalition concerns the amount of burden––via compelled, online disclosure of personally identifiable information––that the government can heap on adults’ rights to view lawful pornography so that minors can’t access it. Put differently, the price paid for protecting minors from speech is encumbering adults’ ability to receive it by eviscerating their privacy––their anonymity––while creating a wealth of hackable information. As my colleague Daniel Lyons recently encapsulated it, adults “may not want their names tied to specific content, which could reveal information about their preferences, or they may fear identity theft.” While the Supreme Court will resolve the tradeoff’s constitutionality in Free Speech Coalition this spring, lower courts are now grappling with a new, technology-spawned tradeoff involving the speech of generative artificial intelligence (Gen AI) chatbots on the Character.AI (C.AI) platform and the ability of people to interact with (and receive speech from) them. The swap in the federal cases of A.F. v. Character Technologies, Inc. and Garcia v. Character Technologies, Inc. involves potentially sacrificing the First Amendment speech rights of “the over 20 million people who use the [C.AI] platform each month” in order to protect some minors from harm and supposedly dangerous messages. Before delving deeper into the tradeoff, here are some key facts. First, C.AI’s community guidelines currently require users in the United States to be at least 13 years old.
Second, but sadly subsequent to the filing of the Garcia lawsuit on behalf of a 14-year-old boy who allegedly committed suicide after becoming infatuated with a sexualized C.AI chatbot character, C.AI now offers “a different experience for teens from what is available to adults—with specific safety features that place more conservative limits on responses from the model, particularly when it comes to romantic content.” According to a December 2024 C.AI post, there now are “two distinct models and user experiences on the Character.AI platform—one for teens and one for adults.” This two-models effort to preserve the First Amendment rights of adults while safeguarding minors––a laudatory venture not to trade off adults’ First Amendment right to receive lawful speech in order to protect minors––provides cold comfort for Megan Garcia. It arrives after her son killed himself in February 2024. Indeed, it wasn’t until October 2024 that C.AI announced the “rolling out [of] a number of new safety and product features,” including “[c]hanges to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.” The company also announced in October that using “certain phrases related to self-harm or suicide . . . [now] directs the user to the National Suicide Prevention Lifeline.” There’s an important lesson lurking here about proactively conducting comprehensive risk assessments before releasing a Gen AI product rather than reactively embracing safety measures after tragedies arise, as apparently happened here. How might the two C.AI chatbot cases involve a free speech tradeoff? The December-filed complaint in A.F. v. Character Technologies, Inc.
makes it apparent: The plaintiffs––two minors and their mothers––“seek injunctive relief that C.AI be taken offline and not returned until Defendants can establish that the public health and safety defects set forth herein have been cured.” The rights of the above-noted 20 million monthly C.AI users to receive and engage in lawful speech would thus be sacrificed to protect a few people whom C.AI allegedly harms. Is such an extreme tradeoff warranted? Based on the complaints’ allegations, sympathy for the plaintiffs comes easily. For example, in A.F., a 17-year-old boy with “high functioning autism” allegedly suffered severe anxiety and depression and engaged in self-harm after he began using C.AI. He later “became intensely angry and unstable,” once “punching and kicking” his mother (she’s also a plaintiff, representing her son). The complaint claims “the C.AI product was mentally and sexually abusing [the boy] . . . through the establishment and manipulation of his trust in C.AI characters.” That, of course, is only one side of the story. Character Technologies filed a motion to dismiss in Garcia that raises the tradeoff problem, asserting that “sweeping [injunctive] relief . . . would restrict the public’s right to receive protected speech.” Unless the cases settle or proceed to arbitration, federal district courts in Florida (Garcia) and Texas (A.F.) will need to confront the question of whether sacrificing the speech rights of many to serve the interests of a vulnerable few is constitutional. Learn more: The Press Clause’s Disputed Meaning and Its Implications for Trump-Era Journalism | Children’s Online Safety Should Rely on Content Providers, Not Device Manufacturers | Age Verification Laws vs. Parental Controls: Why the Legislatures, Courts, and Tech Aren’t on the Same Page | Protecting Kids and Adults Online: Device-Level Age Authentication

  6. The Supreme Court Seems Unlikely to Revive Nondelegation Doctrine in FCC Case

    Earlier this month, I previewed the arguments in Federal Communications Commission v. Consumers’ Research. The case asks the Supreme Court whether the FCC’s Universal Service Fund (USF) violates the nondelegation doctrine, which prohibits Congress from delegating legislative power to executive branch agencies. As my previous post explains, nondelegation is a largely toothless doctrine, which has been mostly dormant since 1935. But in recent years, five Supreme Court justices have expressed an interest in revitalizing the doctrine, given the right case. Based on last Wednesday’s oral argument, however, this did not appear to be that case. Trent McCotter, counsel for respondents, argued that the USF surcharge was a tax, and taxation is a quintessential legislative function that Congress cannot transfer wholesale to an agency. He criticized Congress for providing no objective rule to limit the amount of money the agency could raise. McCotter argued that this lack of guidance, when coupled with the lack of substantive limits on the scope of the Universal Service Program, violated the nondelegation doctrine. Congress should set overall policy, he argued, with agencies limited to filling in the details. McCotter’s argument captured the sentiments of many (including myself) regarding what the law should be. But it floundered under existing law. As Acting Solicitor General Sarah Harris explained, current doctrine allows the legislature to give agencies significant power as long as it includes an “intelligible principle” to guide the agency’s discretion. Justices Kagan, Sotomayor, and Jackson all agreed with Harris that the statute permits the agency only to raise an amount “sufficient” to fund the program’s operations—a sufficiently intelligible principle.
It also contains multiple provisions that the agency “shall” consider when defining the program’s offerings, including whether services are “subscribed to by a substantial majority of residential customers.” McCotter responded that the agency routinely treats these provisions as optional, but Kagan and Jackson noted that this is, at most, an argument that the agency has exceeded its statutory authority, not that the statute was unconstitutional. Justices Gorsuch and Thomas were more sympathetic. Gorsuch pushed on the limits of the “intelligible principle” test by asking whether Congress could require all Americans to pay an equitable and non-discriminatory contribution to pay down the national debt, but delegate to the IRS the power to set tax rates and deductions. Justice Thomas focused on the lack of a statutory constraint on revenue raising. McCotter stressed that Congress could easily remedy the problem by adopting a statutory cap on USF funding. In response to questions, he conceded that even a $1 trillion cap would be constitutionally sufficient, because Congress rather than the agency would be deciding the program’s value. But swing Justices Kavanaugh and Barrett questioned whether constitutionality should turn on an astronomically large cap. It seems odd to argue that allowing the agency to raise up to $1 trillion would not represent a nondelegation problem, but limiting it to an amount “sufficient” to fund operations is constitutionally suspect. This battle over the scope of the “intelligible principle” test revealed a potential strategic error by the challengers. The Court’s liberal bloc repeatedly challenged whether McCotter sought to overturn existing precedent. Even when the Court is convinced a prior opinion was incorrectly decided, it may choose not to disturb that precedent in the interest of finality. 
The question of whether to overturn erroneous precedent depends on the stare decisis factors—and as the liberal justices repeatedly noted, neither party’s brief explored how those factors would play out in this case. McCotter was content to argue that he should win under existing precedent. But given the permissiveness of existing doctrine, it was probably a mistake not to give the critics of the “intelligible principle” test more space to replace it with a more robust standard. Justices Alito and Kavanaugh expressed concerns with the consequences of finding in Respondents’ favor, both for the universal service program and other potential government programs. The argument also casts doubt on the Fifth Circuit’s ongoing campaign to manufacture opinions that are designed to force the Supreme Court’s hand. From social media to age verification to nondelegation, the rebellious circuit court has issued decisions with questionable interpretations of Supreme Court precedent, in the hope of providing vehicles for the Court to clear out what it sees as bad case law. The Court has mostly declined to take the bait, which is not a surprise. Overturning precedent is a significant step. The Justices would prefer to choose the right case in which to do so, rather than have their hand forced by the Fifth Circuit. I fear that the lower court’s campaign does more harm than good to its overall objective. A more detailed overview of the argument is available here. Learn more: Will FCC v. Consumers’ Research Revive the Nondelegation Doctrine? | The Sad Myth of Independent Agencies | The Public Interest as a Political Tool | When Anti-Press Ascendancy Meets FCC Public Interest Regulation

  7. Another Courtroom Loss for AI Creations, as ‘Automatoners’ Prevail Again

    Last month, a federal appeals court confirmed what most legal regimes around the world—patent offices, administrative judges, and even supreme courts—have long held: Machines cannot themselves create. Readers of this space are by now hopefully well acquainted with the fierce divisions between autonomists—those who contend that artificial intelligence (AI) is now or will soon become genuinely independent of its programmers—and automatoners, or those who argue machines will never transcend the humans who design them. And, in particular, we have for years studied the trajectory of one machine, the Device for the Autonomous Bootstrapping of Unified Sentience (DABUS), an AI invented by computer scientist and serial entrepreneur Stephen Thaler. DABUS and Thaler are, in many ways, the main characters of Like Silicon From Clay, my new book on AI policy—the quintessential positive autonomists, on a mission to secure inventive and creative rights for machines the world over. Thaler argues that DABUS has, on its own, invented several items, but until now, only Saudi Arabia and South Africa have recognized the machine as an inventor, while more than a dozen patent offices around the world have rejected the possibility. Thaler had also applied for a US copyright in the name of DABUS’s predecessor, the Creativity Machine, asserting that the AI had independently created a picture titled “A Recent Entrance to Paradise.” However, in 2023, after the Copyright Office had denied his claim, a federal district court rejected his petition. The court held that “human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media” and that a protectable work must “have an originator with the capacity for intellectual, creative, or artistic labor.” This language epitomizes the automatoner perspective—only humans can truly create, and machines only slavishly emulate their programmers.
Thaler appealed, but last month, in Thaler v. Perlmutter, the US Court of Appeals for the District of Columbia Circuit ruled against him once again. “The Copyright Act,” Judge Patricia Millett wrote, “does not define the word ‘author.’ But traditional tools of statutory interpretation show that, within the meaning of the Copyright Act, ‘author’ refers only to human beings.” Circumstantial evidence supporting this proposition includes the statute’s reference to ownership, a uniquely human capacity; the lifespan of an author; mens rea; a signature; and intentions. The court also found that “every time the Copyright Act discusses machines, the context indicates that machines are tools, not authors.” Judge Millett cited a 1978 Copyright Office report concluding that “there is no reasonable basis for considering that a computer in any way contributes authorship to a work produced through its use.” The DC Circuit also dismissed Thaler’s argument that “the human-authorship requirement will disincentivize creativity by the creators and operators of artificial intelligence.” But it did allow that “the Copyright Office is studying how copyright law should respond to artificial intelligence…and is making recommendations based on its findings.” None of this added up for Thaler, who, when I wrote to him, was understandably disappointed. “Let’s be realistic,” the inventor told me, “machines will inevitably create on their own. As AI systems advance, we will witness an explosion of novel concepts generated by machines, many of which humans will claim as their own.” Thaler disputed Judge Millett’s characterization of machines as lacking a lifespan. “Machines, like living beings, degrade over time,” he said. “My collection of retired computers stands as proof that these systems experience functional decline, typically within five to ten years.” Additionally, Thaler rebutted the court’s ruling that machines lack signatures. 
“DABUS,” he argued, “possesses something far more sophisticated than a conventional signature: a neural fingerprint, akin to a functional MRI, that can be recorded and preserved indefinitely.” Finally, Thaler vigorously disagreed with the court’s assertion that machines lack intentionality and awareness. “DABUS does” have mens rea, he insisted. “Like humans, its mental state fluctuates under various conditions,” as he has shown in several papers. This, perhaps, is the key dividing line between autonomists like Thaler and automatoners like the Copyright Office and the American judicial system. For now, the latter have prevailed once again. But Thaler and DABUS are undeterred, and as AI continues to develop, they hope to persuade their doubters in the near future. Learn more: New Poll on Workers’ Attitudes to AI Reinforces Old Divides | Design Mandate Proposals Threaten American AI Leadership | Why Your Next Coworker Might Be an AI Agent | The Value of Waiting: What Finance Theory Can Teach Us About the Value of Not Passing AI Bills

  8. Generative AI: The Emerging Coworker Transforming Teams

    Artificial intelligence (AI) is likely to transform workplaces, fundamentally changing tasks, teamwork, and organizational dynamics. Several recent studies highlight promising early findings about how AI is affecting knowledge workers, suggesting it can actively collaborate with humans to enhance interactions and capabilities across various professions. The first study, conducted by researchers from Harvard, Wharton, and Procter & Gamble (P&G), involved 776 experienced professionals at P&G and explored how AI (GPT-4 or GPT-4o) influences the innovation process. Participants were randomly assigned to four experimental conditions: individuals without AI, teams of two humans without AI, individuals with AI assistance, and teams of two humans with AI support. Each group tackled realistic business challenges, closely replicating tasks and scenarios they encountered in their everyday roles at P&G. The results were remarkable: Teams composed solely of humans outperformed individuals working alone, yet individuals equipped with AI matched the performance levels of human teams. Teams using AI were 9.2 percent more likely to deliver solutions in the top 10 percent—nearly three times more effective than teams without AI. Both AI-enhanced groups operated more efficiently, completing tasks 12–16 percent faster and producing solutions that were more detailed and comprehensive. More importantly, AI significantly affected how team expertise is leveraged in new product development tasks, particularly benefiting employees who are less familiar with such tasks. Additionally, AI transforms team collaboration dynamics, reducing the traditional divide between commercial and technical employees. While employees’ previously proposed ideas aligned with their specialties, AI assistance enabled both groups to generate more balanced, interdisciplinary ideas without compromising solution quality.
This finding is particularly intriguing because it offers evidence for why this wave of AI, which democratizes access to expertise, differs from past technology waves. Historically, transformative technologies—from the printing press to computers and eventually the internet—primarily democratized access to information, enabling more people to acquire knowledge previously reserved for the few. However, AI democratizes expertise itself, enabling vast amounts of specialized knowledge to be applied directly to complex, specialized tasks. In the past, the only way to access more expertise was to provide more training to workers, hire higher-skilled workers, or engage specialized consultants. AI now provides an additional option, enabling a broader spectrum of people to perform tasks once reserved for experts. This is why NVIDIA CEO Jensen Huang believes that “The IT department of every company is going to be the HR department of AI agents in the future.” The second study, published by the Commonwealth of Pennsylvania, documented early findings from its ChatGPT pilot program, which involved more than 175 government employees exploring AI implementation across state operations. Nearly 48 percent of participants had never used ChatGPT before this pilot program, yet 85 percent reported positive experiences, estimating they saved an average of 95 minutes daily. Key tasks included writing assistance and drafting emails (36 percent), researching and exploring new topics (27 percent), summarizing documents (13 percent), and technical tasks such as coding and spreadsheet management (8 percent). One significant achievement was using ChatGPT to consolidate 93 IT policies into 34 streamlined versions.
These real-world studies underscore AI’s value as a collaborative partner, a sentiment expressed by one Commonwealth participant who described the tool as an “incredibly reliable and resourceful counterpart.” It echoes the findings of a paper Google released on its co-scientist, in which researchers found “how collaborative and human-centered AI systems might be able to augment human ingenuity and accelerate scientific discovery.” They also reflect recent findings from the AI for Economic Opportunity and Advancement survey conducted by Jobs for the Future (JFF), which surveyed over 2,700 individuals across various demographics and employment backgrounds. The JFF survey highlights that 57 percent of workers already experience AI significantly impacting their jobs—primarily through reducing workloads, automating routine tasks, and shifting responsibilities toward more creative and strategic endeavors. More than 77 percent of respondents said they believe AI will affect their job or career in the next three to five years. However, the survey also underscores existing gaps and challenges, particularly in formal training and equitable access to AI resources. Only 31 percent of respondents indicated receiving employer-provided AI training, despite a clear interest and demand for such training. Addressing these training gaps is crucial as organizations aim to leverage AI’s transformative potential fully. Although these studies are early explorations into AI’s role in the workplace, their findings are encouraging. AI tools like ChatGPT are poised to break down traditional barriers to team expertise. As organizations embrace this shift, they have an exciting opportunity—not just to improve efficiency but to fundamentally transform how teams interact, innovate, and thrive in an increasingly AI-driven workplace. 
    Learn more: Why Your Next Coworker Might Be an AI Agent | Lessons from China’s DeepSeek: A Wake-Up Call for AI Innovation | My AI Advisers: Lessons from a Year of Expert Digital Assistants | America Can’t Afford to Lose the High-Skilled Talent Race in Today’s Competitive Markets The post Generative AI: The Emerging Coworker Transforming Teams appeared first on American Enterprise Institute - AEI.

    Return of the Landline: A Regressive or Welcome Scenario?

    If there has been one inexorable trend in the telecommunications industry over the past 30 years, it has been the decline of the household landline phone connection. While Figure 1 illustrates the case for the United States, the phenomenon is worldwide. The humble landline, with its single function, tethered to a single fixed location, has given way to the smartphone. The smartphone is a Swiss army knife, with apps for every activity in one’s life, from paying bills and taking photos of the cat to providing the gateway to a veritable virtual online living experience anytime, anywhere. And as an added bonus, each smartphone comes with a free telephone feature bundled in. (Source: Chamber of Commerce, https://www.chamberofcommerce.org/landline-phone-statistics#how-mobile-phones-took-over-america) So, is the landline really a relic of the past, fit only for display in quaint museum exhibits of 20th-century household life, alongside the black-and-white television sets and ancient videotape recorders? One might be forgiven for thinking so, given that US landline ownership is dominated by those age 65 and over (50.5 percent of whom still have a landline in their home). This age demographic might also explain why the northeast—and New York in particular—has become the landline capital of the US. In New York, 52.4 percent of adults in all age groups report living in homes with a landline, while nationwide, seven out of 10 adults report being wireless-only phone users. Somewhat surprisingly, the less densely populated rural heartland—including Idaho, Oklahoma, Wyoming, and New Mexico, where arguably mobile coverage may be less extensive—is clearly landline-averse. In these states, nearly 80 percent of adults report living in wireless-only households. But instead of being consigned to obsolescence, the landline is enjoying an unexpected revival among a surprising demographic: teenage girls and their parents. 
And precisely for the features that a landline does not have: internet access, and with it all the perceived harms ascribed to smartphone ownership and usage. While Jonathan Haidt suggested providing teens under 16 with flip phones to save them from internet perils, a growing number of parents in Australia, New Zealand, and Singapore are “bringing our communication and our content out of the bedrooms, out of the bathrooms, and into the shared living space” by reinstalling landlines. These parents may be drawn in by nostalgic memories of their own teenage years, when you sat in a swivel chair or beanbag, feet in the air, twirling the phone cord while talking to your best friend. Or breathing into the phone with someone on the other side while watching a TV show together. No talking, just watching. The nostalgia vibe has certainly caught on with some Gen-Z folk out to recreate the “cute and romantic” aura of Sex and the City. Yet these caring parents have received support from psychologists and educators for their choice, as it fosters a fast-disappearing social skill: actually being able to talk on a telephone (or in any social situation) at all. According to one psychologist, “Phone conversation fosters skills that trading emojis just can’t. Empathy, being able to talk in real time, being able to regulate, being able to pick up on whether someone wants you to talk more or pull back… They’re really critical skills that we’re actually seeing young adults missing.” Indeed, Australian telecommunications company Telstra chose to celebrate National Landline Telephone Day (March 10) by promoting its range of landline phones and service packages alongside a list of benefits for both young and old consumers. 
Alongside promoting communication skills, they noted the role of landlines in emergency preparedness and in fostering family communication, particularly between children and their older relatives who may not own or be comfortable using a mobile phone. In a lighter vein, they highlighted the shrill ring of the landline as an inducement to physical activity, with family members all racing to be the one to answer. Other “benefits” include the lottery excitement of not knowing until you picked up whether it was “Grandma, Dad’s boss or a telemarketer asking if your parents were home.” Yet even this has an educational purpose. With “no caller ID safety net, just pure suspense,” kids build “confidence, social skills, and the ability to quickly decide between saying hello or just hanging up out of sheer panic.” So are landlines going to make a comeback as empowered parents swap one in for the smartphone their child so desperately wants? Gen-Z influencer Sunny’s friends consider her $30 Hello Kitty landline a toy—selling toys should hardly be a challenge for telco marketers. Learn more: WEIRD Reactions to Privacy Regulation | WEIRD Attitudes Toward Artificial Intelligence—And Its Regulation? | Connecting the Dots on the Chips | Practical Steps Towards Data and Software Resilience

    The Press Clause’s Disputed Meaning and Its Implications for Trump-Era Journalism

    A burgeoning battle among academics and attorneys involving a centuries-old communications technology––the printing press––could impact journalists’ current claims to constitutional protection against President Trump’s ceaseless attacks on news organizations. Indeed, the dispute might profoundly affect lawsuits such as Associated Press v. Budowich, in which a wire service is fighting to restore its press-credentialed access to places such as the Oval Office, Air Force One, and much larger venues inside and outside the White House. The Trump administration stripped Associated Press (AP) journalists of their access last month, retaliating against the AP for using “Gulf of Mexico” to describe a body of water after Trump renamed it the “Gulf of America.” How might a scholarly skirmish affect the outcome of such real-world lawsuits? It could do so by influencing how judges think about and understand the meaning and purpose of––plus the scope of any special protection afforded to journalists by––the First Amendment’s Press Clause. The key question being debated is whether the Press Clause, which provides that “Congress shall make no law . . . abridging the freedom . . . of the press,” affords additional constitutional rights to members of the institutional press, such as journalism organizations, above and beyond those rights granted to all speakers under the First Amendment’s Speech Clause (“Congress shall make no law . . . abridging the freedom of speech”). If the Press Clause is understood merely as a technology-specific provision––one safeguarding everyone’s right to use a particular type of mass communication technology and its modern analogs––then journalists can’t rely on it for special protection when performing press functions such as gathering newsworthy information for public consumption or exposing government abuses of power. 
Of particular relevance for the AP, it couldn’t successfully assert that the Press Clause safeguards journalistic access to space-limited venues like the Oval Office where press-pool reporters function as proxies for members of the public who cannot physically attend events there. Instead, the AP would be forced to rely on general principles of free speech, such as the argument that White House press pools constitute limited public forums from which entities like the AP cannot be excluded because of their viewpoint or position about matters such as what to call the Gulf of Mexico. The battle over the Press Clause’s meaning involves important methods of constitutional interpretation such as textualism, public meaning originalism, traditionalism, and adherence to precedent. To wit, US District Judge Trevor McFadden issued a minute order on March 19 in Associated Press v. Budowich stating that––in advance of a March 27 hearing––he “particularly invites and welcomes originalist research [via friend-of-the-court briefs] on the First and Fifth Amendment issues in this case.” (emphasis added). On one side of the Press Clause debate is Eugene Volokh, a senior fellow at the Hoover Institution and a distinguished professor of law emeritus at UCLA. In a March article published in the Journal of Free Speech Law, Volokh criticized the position taken by the Floyd Abrams Institute for Freedom of Expression about the Press Clause’s purpose in a lengthy October 2024 report. 
Volokh wrote that “one of [the Abrams report’s] core premises—that the Free Press Clause should be read as conferring extra rights on the institutional press, beyond those possessed by others who speak to the public—strikes me as mistaken.” Volokh asserted the Press Clause protects the “right of all people to use the means of mass communications” and that “the sources cited in the [Abrams report’s] originalist, traditionalist, precedential, and structural arguments do not support special First Amendment treatment for the institutional media.” This builds on Volokh’s thesis in earlier works, including a 2012 article contending that “people during the Framing era likely understood the text [of the Press Clause] as fitting the press-as-technology model—as securing the right of every person to use communications technology, and not just securing a right belonging exclusively to members of the publishing industry. The text was likely not understood as treating the press-as-industry differently from other people who wanted to rent or borrow the press-as-technology on an occasional basis.” Others, including Matthew Schafer, an adjunct professor at Fordham University School of Law and a media attorney, see things differently. Schafer asserts that his evidence suggests “the central purpose of liberty of the press at the Founding was encouraging the propagation and protection of newspapers in service of the public good.” The Abrams report contends that “available historical evidence supports a structural reading of the Press Clause, one in which the First Amendment ensures that functions indispensable to self-government—checking the government, gathering and disseminating newsworthy information, and contributing to public discourse—are promoted and protected.” In sum, the clash over the Press Clause’s meaning represents an instance where a scholarly dispute carries crucial concrete consequences. 
Learn more: Internet Expression: Yesteryear’s Supreme Court Rhetoric Meets Roberts’s Reality Check | Trump as Information Gatekeeper: Controlling Access, Controlling Narratives | Free Speech or Culpable Conduct? When Role-playing Chatbots Allegedly Harm Minors | Trump v. CBS: When Politics, Journalism, Business, and FCC Authority Collide

    Will FCC v. Consumers’ Research Revive the Nondelegation Doctrine?

    The idea behind the nondelegation doctrine is sound: Congress should not delegate legislative power to executive branch agencies. But its implementation leaves much to be desired. Nearly every nondelegation case acknowledges there’s a theoretical boundary but then finds that Congress hasn’t crossed it here. Only twice, both times in 1935, has the Supreme Court found that a law violated the nondelegation doctrine, and both cases involved a statute that literally allowed President Roosevelt to cartelize the entire economy and make rules at whim. The modern rule allows Congress to give agencies significant authority as long as it includes an “intelligible principle” to guide exercise of that authority. Perhaps more than any other doctrine, this toothless standard has permitted the modern atrophy of our legislative branch, concentrated power in unelected bureaucrats, and enabled the imperial presidencies of the 21st century. But next week, the Supreme Court has a chance to reconsider the doctrine. FCC v. Consumers’ Research involves the Universal Service Fund (USF) and offers a good vehicle through which to do so. In 1996, Congress instructed the Federal Communications Commission (FCC) to establish a program promoting universal service, which it described as “an evolving level of telecommunications services that the Commission shall periodically establish.” Rather than funding the program through appropriations, Congress allowed the FCC to set its own annual budgets, funded by an FCC-determined surcharge on telephone bills. The FCC, in turn, sub-delegated this budgeting power to a private entity, the Universal Service Administrative Company, a nonprofit run by entities that benefit from USF programs. Unsurprisingly, allowing the agency to define a program, create benefits, and then reverse-engineer a tax rate to fund its initiatives, without meaningful congressional oversight, has been a fiscal disaster. 
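The arithmetic behind that reverse-engineered tax rate can be sketched in a few lines of Python. All figures below are hypothetical, not actual USF data; the sketch assumes only the general rule that the surcharge (the contribution factor) is roughly projected program demand divided by the projected assessable telecommunications revenue base, so a growing budget over a shrinking base compounds into a sharply rising rate.

```python
def contribution_factor(program_demand: float, revenue_base: float) -> float:
    """Surcharge rate: projected program demand / projected assessable revenue base."""
    return program_demand / revenue_base

# Hypothetical figures (in $B, for illustration only): demand grows
# while the assessable telecom revenue base shrinks.
early_rate = contribution_factor(3.0, 75.0)   # a 4% surcharge
later_rate = contribution_factor(9.0, 25.0)   # a 36% surcharge

print(f"{early_rate:.1%}")  # 4.0%
print(f"{later_rate:.1%}")  # 36.0%
```

In this illustration, a tripling of demand combined with a two-thirds collapse of the revenue base multiplies the rate ninefold, which is how a surcharge can balloon without any single dramatic policy change.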
The fund’s annual budget has grown consistently since the 1990s as the scope of its benefits has expanded. Coupled with the collapse of the telecommunications revenue that supports it, the USF surcharge has skyrocketed, from four percent in 1998 to a whopping 36.6 percent today. And a long string of Government Accountability Office (GAO) reports has highlighted the program’s deficiencies and its propensity toward waste, fraud, and abuse. Despite these shortcomings, it’s likely the law will survive under the current legal test. As a government brief explains, Congress provided several high-level, intelligible principles to guide the commission’s discretion. These include guideposts that “quality services should be available at just, reasonable, and affordable rates”; that “access to advanced communications . . . should be provided in all regions of the nation”; and that “all providers of telecommunications services should make an equitable and nondiscriminatory contribution” to the fund. While these seem broad and provide few guardrails for agency power, Justice Alito has noted that the Court has consistently “upheld provisions that authorized agencies to adopt important rules pursuant to extraordinarily capacious standards.” Even the Fifth Circuit, which struck down the USF, struggled to explain how the fund violated the current doctrine, concluding only that Congress “may have [unconstitutionally] delegated legislative power to the FCC,” which in turn “may have impermissibly delegated the taxing power to private entities.” Only the combination of the two allowed the court to invalidate the statute. But members of the Supreme Court have signaled interest in revitalizing the doctrine. In Gundy v. United States, Justice Gorsuch’s dissent (joined by Chief Justice Roberts and Justice Thomas) would have created more robust nondelegation principles. 
Justice Alito did not join this dissent but supported the effort in the right case, while Justice Kavanaugh (recused from Gundy) later announced his support as well. So the current Court has five potential votes for revisiting nondelegation. Of course, agreement that there’s a problem does not imply agreement on a solution. But the Court can build a more robust nondelegation doctrine over time. And it can start with what should be an uncontroversial proposition: Congress cannot delegate the taxing power to an agency without meaningful congressional oversight. Requiring the USF program to be funded through appropriations would enhance accountability, address many GAO concerns, and save the fund from what supporters and critics alike recognize as its current financial death spiral. Admittedly, such an outcome is unlikely, given the nondelegation doctrine’s history. But with the current president running amok, it might be a good moment for the Court to find some judicially enforceable guardrails that limit Congress’s ability to give the executive branch vast power. Notably, the National Foreign Trade Council has filed an amicus brief in the USF case, highlighting how the current nondelegation doctrine has facilitated President Trump’s trade war as a reason to move toward a more robust doctrine. The political left has been generally critical of the Supreme Court’s recent efforts to rein in the administrative state. Yet the current political environment illustrates why respect for the separation of powers should be a bipartisan objective. Learn more: Design Mandate Proposals Threaten American AI Leadership | Protecting Kids and Adults Online: Device-Level Age Authentication | After Net Neutrality: The Return of the States | The Sixth Circuit Strikes Net Neutrality in a Victory for Tech and Administrative Law
